Technological singularity

Kurzweil writes that, due to paradigm shifts, a trend of exponential growth extends from integrated circuits to earlier transistors, vacuum tubes, relays, and electromechanical computers.

A technological singularity is a hypothetical event occurring when technological progress becomes extremely rapid due to positive feedback, making the future after the Singularity qualitatively different and hard to predict. It has been suggested that such an event will occur during the 21st century, and several mechanisms have been proposed by which it could come about.[1][2]

Vernor Vinge proposed that the creation of smarter-than-human intelligence would represent a breakdown in humans' ability to model their future. Humans cannot, he says, predict the actions of more intelligent entities. He compared this to the way physics breaks down when used to model the space-time singularity beyond the event horizon of a black hole.[3]

The second conception of the singularity is that of an intelligence explosion, a term coined by I. J. Good.[4] Although technological progress has been accelerating, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia.[5] With the increasing power of computers and other technologies, however, it might eventually be possible to build a machine more intelligent than humanity.[6] If a smarter-than-human intelligence were invented, whether through the amplification of human intelligence or through artificial intelligence, it would bring to bear greater problem-solving and inventive skills than humans possess. It could then design a yet more capable machine, or rewrite its own source code to become more intelligent; this more capable machine could in turn design a machine of even greater capability. These iterations could accelerate, leading to recursive self-improvement, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[7][8][9]

Futurist Ray Kurzweil generalizes the singularity to the sudden growth of any technology, not just intelligence. He argues that a singularity, in the sense of sharply accelerating technological change, is inevitably implied by a long-term pattern of accelerating change that generalizes Moore's law to technologies predating the integrated circuit, and that includes material technology (especially as applied to nanotechnology), medical technology, and others.[10]

The term technological singularity reflects the idea that the change may happen suddenly, and that it is very difficult to predict how such a new world would operate.[11][12] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[13][14] as the issue has not been dealt with by most AGI researchers, although the topic of Friendly AI is investigated by the Singularity Institute for Artificial Intelligence and the Future of Humanity Institute.[11]

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Jeff Hawkins, John Holland, Daniel Dennett, Jaron Lanier, and Gordon Moore, whose eponymous Moore's Law is often cited in support of the concept.[15][16]

History of the idea

In 1958, Stanisław Ulam wrote in reference[17] to a conversation with John von Neumann:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.

In 1965, I. J. Good first wrote of an "intelligence explosion", suggesting that if machines could even slightly surpass human intellect, they could improve their own designs in ways unforeseen by their designers, and thus recursively augment themselves into far greater intelligences. The first such improvements might be small, but as the machine became more intelligent it would become better at becoming more intelligent, which could lead to a cascade of self-improvements and a sudden surge to superintelligence (or a singularity).

Mathematician and author Vernor Vinge greatly popularized Good’s notion of an intelligence explosion, addressing the topic in print in the January 1983 issue of Omni magazine.

In 1985, Ray Solomonoff introduced the notion of an "infinity point"[18] in the time scale of artificial intelligence, analyzing both the magnitude of the "future shock" that "we can expect from our AI expanded scientific community" and its social effects. Estimates were made "for when these milestones would occur, followed by some suggestions for the more effective utilization of the extremely rapid technological growth that is expected".

A 1993 article by Vinge, "The Coming Technological Singularity: How to Survive in the Post-Human Era",[19] contains the oft-quoted statement, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge refines his estimate of the time scales involved, adding, "I'll be surprised if this event occurs before 2005 or after 2030."

Vinge continues by predicting that superhuman intelligences, however created, will be able to enhance their own minds faster than the humans that created them. "When greater-than-human intelligence drives progress," Vinge writes, "that progress will be much more rapid." This feedback loop of self-improving intelligence, he predicts, will cause large amounts of technological progress within a short period, and the creation of smarter-than-human intelligence will represent a breakdown in humans' ability to model their future. His argument was that authors cannot write realistic characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express. Vinge named this event "the Singularity". The 1993 essay associated the Singularity explicitly with I. J. Good's intelligence explosion and used Moore's law to try to project the arrival time of artificial intelligence (AI); Moore's law thereafter came to be associated with the "Singularity" concept.

Aubrey de Grey has applied the term the "Methuselarity"[20] to the point at which medical technology improves so fast that expected human lifespan increases by more than one year per year.

Robin Hanson, taking "singularity" to refer to sharp increases in the exponent of economic growth, lists the agricultural and industrial revolutions as past "singularities". Extrapolating from such past events, Hanson proposes that the next economic singularity should increase economic growth by a factor of between 60 and 250. An innovation that allowed for the replacement of virtually all human labor could trigger this event.[21]

Eliezer Yudkowsky has suggested[1] that many of the different definitions that have been assigned to singularity are mutually incompatible rather than mutually supporting. For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or smarter-than-human intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability.

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, whose stated mission is "to assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies in order to address humanity’s grand challenges."[22] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA's Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year. Program faculty include experts in technology, finance, and future studies, and a number of videos of Singularity University sessions have been posted online.

Some prominent technologists such as Bill Joy, founder of Sun Microsystems, have voiced concern over the potential dangers of the Singularity.(Joy 2000)

In 2010, physician and author Brandon Colby discussed the association between the genetic revolution and the technological singularity in his book Outsmart Your Genes.[23] Colby described the ways in which predictive medicine and genetic technology are likely to affect health care, longevity, and society over the coming years and into the next decade.[24]

Intelligence explosion

Good (1965) speculated on the effects of machines smarter than humans:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

Most proposed methods for creating smarter-than-human or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The speculated means of producing intelligence augmentation are numerous, and include bio- and genetic engineering, nootropic drugs, AI assistants, direct brain-computer interfaces, and mind uploading. The existence of multiple paths to an intelligence explosion makes a singularity more likely; for a singularity not to occur, they would all have to fail.[3]
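The "all paths would have to fail" logic can be made concrete with a minimal sketch. The independence assumption and the per-route failure probabilities below are purely illustrative, not taken from the source:

```python
# Probability that each independent route to smarter-than-human intelligence
# fails outright (illustrative numbers only).
p_fail = {
    "brain-computer interfaces": 0.9,
    "bio/genetic engineering": 0.9,
    "nootropic drugs": 0.95,
    "artificial intelligence": 0.7,
}

p_no_singularity = 1.0
for route, p in p_fail.items():
    p_no_singularity *= p  # every route must fail for no singularity to occur

print(f"P(no singularity) = {p_no_singularity:.2f}")  # ~0.54, even with these
                                                      # pessimistic per-route odds
```

Each additional independent route multiplies the overall failure probability down further, which is the force of the argument.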

Despite the numerous speculated means for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option for organizations trying to advance the singularity. Hanson (1998) is also skeptical of human intelligence augmentation, writing that once one has exhausted the "low-hanging fruit" of easy methods for increasing human intelligence, further improvements will become increasingly difficult to find.

Whether or not an intelligence explosion occurs depends on three factors.[25] The first is the accelerating factor: the new intelligence enhancements made possible by each previous improvement. Conversely, as intelligences become more advanced, further advances will become more and more complicated, possibly outweighing the advantage of increased intelligence. Each improvement must be able to beget at least one more improvement, on average, for the singularity to continue. Finally, there is the issue of a hard upper limit: absent something like quantum computing, the laws of physics will eventually prevent any further improvements.
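The middle criterion can be read as the critical threshold of a branching process. The sketch below is illustrative only; the Poisson offspring model, the mean parameter, and the cap standing in for physical limits are all assumptions, not from the source:

```python
import numpy as np

def improvement_cascade(mu, cap=10**6, seed=0):
    """Each improvement begets, on average, `mu` further improvements
    (Poisson offspring -- an illustrative modelling choice). `cap`
    stands in for the hard upper limit imposed by physics."""
    rng = np.random.default_rng(seed)
    frontier = total = 1
    while frontier > 0 and total < cap:
        # Follow-on improvements begotten by the current generation.
        frontier = int(rng.poisson(mu, frontier).sum())
        total += frontier
    return total

# Below mu = 1 the cascade fizzles out; above 1 it runs away toward the cap.
for mu in (0.8, 1.0, 1.2):
    print(f"mu = {mu}: {improvement_cascade(mu):,} total improvements")
```

In branching-process terms, "at least one more improvement, on average" is exactly the boundary between a cascade that dies out and one that grows without limit.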

There are two logically independent, but mutually reinforcing, accelerating effects: increases in the speed of computation, and improvements to the algorithms used.[26] The former is predicted by Moore's Law and the forecast improvements in hardware,[27] and is comparatively similar to previous technological advances. On the other hand, most AI researchers believe that software is more important than hardware. There is little reason to expect evolution to have optimised human brains for intelligence,[26] suggesting there is low-hanging fruit on the software side.[28]

Speed improvements

The first is improvement in the speed at which minds can be run. Whether human or AI, better hardware increases the rate of future hardware improvements. Simplistically, Moore's Law suggests that if the first doubling of speed took 18 months, the second would take 18 subjective months, or 9 external months, the next four and a half external months, then two and a quarter, and so on: the entire infinite sequence of doublings would complete within 36 external months, a speed singularity.[29] An upper limit on speed may eventually be reached, though it is unclear how high this would be. Hawkins (2008), responding to Good, argued that the upper limit is relatively low:

Belief in this idea is based on a naive understanding of what intelligence is. As an analogy, imagine we had a computer that could design new computers (chips, systems, and software) faster than itself. Would such a computer lead to infinitely fast computers or even computers that were faster than anything humans could ever build? No. It might accelerate the rate of improvements for a while, but in the end there are limits to how big and fast computers can run. We would end up in the same place; we'd just get there a bit faster. There would be no singularity.

If, on the other hand, the upper limit were far above current human levels of intelligence, the effects of the singularity would be enormous enough as to be indistinguishable (to humans) from a singularity with no upper limit at all. For example, if the speed of thought could be increased a million-fold, a subjective year would pass in about 30 physical seconds.[3]
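The arithmetic behind both claims in this section is simple enough to check directly; in this sketch only the 18-month first doubling and the million-to-one speed-up are taken from the text above:

```python
# External time for successive speed doublings: each takes 18 subjective
# months, and doubled hardware speed halves the external cost each time.
external_months = [18 / 2**n for n in range(60)]  # 18, 9, 4.5, 2.25, ...
print(sum(external_months))                       # -> approaches 36.0, the series' limit

# A million-to-one speed-up: how long is a subjective year in wall-clock time?
seconds_per_year = 365.25 * 24 * 3600
print(seconds_per_year / 1_000_000)               # -> ~31.6 seconds, i.e. roughly 30 s
```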

It is difficult to directly compare silicon-based hardware with neurons. But Berglas (2008) notes that computer speech recognition is approaching human capabilities, and that this capability seems to require 0.01% of the volume of the brain. This analogy suggests that modern computer hardware is within a few orders of magnitude of being as powerful as the human brain.

Intelligence improvements

Some intelligence technologies, like seed AI, also have the potential to make themselves more intelligent, not just faster, by modifying their source code. These improvements would make further improvements possible, which would make further improvements possible, and so on.

This mechanism for an intelligence explosion differs from an increase in speed in two ways. Firstly, it does not require any external effect: machines designing faster hardware still require humans to create the improved hardware, or to program factories appropriately. An AI rewriting its own source code, however, could do so while contained in an AI box.

Secondly, as with Vernor Vinge's conception of the singularity, it is much harder to predict the outcome. While speed increases seem to differ from human intelligence only quantitatively, genuine improvements in intelligence would be qualitatively different. Eliezer Yudkowsky compares it to the changes that human intelligence brought: humans changed the world thousands of times more rapidly than evolution had, and in totally different ways. Similarly, the evolution of life was a massive departure from, and acceleration of, previous geological rates of change, and improved intelligence could make change as different again.[30]

There are substantial dangers associated with an intelligence explosion singularity. Firstly, the goal structure of the AI may not be invariant under self-improvement, potentially causing the AI to optimise for something other than what was intended.[31][32] Secondly, AIs could have other uses for the scarce resources mankind needs to survive.[33]

Even an AI that was not actively malicious would have no particular reason to promote human goals unless it were programmed to do so; if it were not, it might divert the resources currently used to support mankind toward its own goals, causing human extinction.[2][28][34]

Impact

Dramatic changes in the rate of economic growth have occurred in the past because of technological advancement. Based on population growth, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world's economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligences causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis.[21]
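A quick check of the figures in this paragraph, using only the doubling times stated above:

```python
agricultural_doubling = 900  # years per economic doubling after the Neolithic Revolution
industrial_doubling = 15     # years per doubling in the current era

print(agricultural_doubling / industrial_doubling)  # -> 60.0, the stated speed-up

# Hanson's 60x-250x range, applied to today's 15-year doubling time:
for factor in (60, 250):
    weeks = industrial_doubling / factor * 52
    print(f"{factor}x faster -> doubling every {weeks:.0f} weeks")
# -> every 13 weeks (roughly quarterly) and every 3 weeks, respectively.
```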

Existential risk

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."[35]

Superhuman intelligences may have goals inconsistent with human survival and prosperity. Berglas (2008) notes that there is no direct evolutionary motivation for an AI to be friendly to humans. In the same way that evolution has no inherent tendency to produce outcomes valued by humans, so too there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than paperclipping the universe.[36][37][38] AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources,[33][39] and humans would be powerless to stop them.[40]

Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

Alternatively, AIs developed under evolutionary pressure to promote their own survival could out-compete humanity.[41] One approach to preventing a negative singularity is an AI box, whereby the artificial intelligence is kept constrained inside a simulated world and not allowed to affect the external world. Such a box would have extremely restricted inputs and outputs; perhaps only a plain-text channel. However, a sufficiently intelligent AI may simply be able to escape from any box we can create: it might, for example, crack the protein folding problem and use nanotechnology to escape, or simply persuade its human 'keepers' to let it out.[35][42][43]

Eliezer Yudkowsky proposed that research be undertaken to produce friendly artificial intelligence in order to address the dangers. He noted that if the first real AI was friendly it would have a head start on self-improvement and thus prevent other unfriendly AIs from developing, as well as providing enormous benefits to mankind.[28] The Singularity Institute for Artificial Intelligence is dedicated to this cause.

A significant problem, however, is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in the design of recursive optimisation processes, friendly AI also requires the ability to make goal structures invariant under self-improvement (otherwise the AI will transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.[44]

Bill Hibbard also addresses issues of AI safety and morality in his book Super-Intelligent Machines.

Implications for human society

In 2009, leading computer scientists, artificial intelligence researchers, and roboticists met at the Asilomar Conference Grounds near Monterey Bay in California to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards. Some machines have acquired various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and have achieved "cockroach intelligence." The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[45]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[46] A United States Navy report indicates that, as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[47][48]

The Association for the Advancement of Artificial Intelligence has commissioned a study to examine this issue,[49] pointing to programs like the Language Acquisition Device, which can emulate human interaction.

Many Singularitarians consider nanotechnology to be one of the greatest dangers facing humanity. For this reason, they often believe that seed AI (an AI capable of making itself smarter) should precede nanotechnology. Others, such as the Foresight Institute, advocate the creation of molecular nanotechnology, which they claim can be made safe for pre-singularity use or expedite the arrival of a beneficial singularity.

Some support the design of "friendly artificial intelligence", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.[50]

Isaac Asimov's Three Laws of Robotics is one of the earliest examples of proposed safety measures for AI:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with either the First or Second Law.

The laws are intended to prevent artificially intelligent robots from harming humans. In Asimov's stories, any perceived problems with the laws tend to arise as a result of a misunderstanding on the part of some human operator; the robots themselves merely act on their best interpretation of their rules. In the 2004 film I, Robot, loosely based on Asimov's Robot stories, an AI attempts to take complete control over humanity for the purpose of protecting humanity from itself, through an extrapolation of the Three Laws. In 2004, the Singularity Institute launched an Internet campaign called 3 Laws Unsafe to raise awareness of AI safety issues and the inadequacy of Asimov's laws in particular. (Singularity Institute for Artificial Intelligence 2004)

Accelerating change

According to Kurzweil, his logarithmic graph of 15 lists of paradigm shifts for key historic events shows an exponential trend. The lists' compilers include Carl Sagan, Paul D. Boyer, Encyclopædia Britannica, American Museum of Natural History, and University of Arizona.

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term "singularity" in the context of technological progress, Stanisław Ulam (1958) tells of a conversation with John von Neumann about accelerating change, quoted above in the history of the idea.

Hawkins (1983) writes that "mindsteps", dramatic and irreversible changes to paradigms or world views, are accelerating in frequency as quantified in his mindstep equation. He cites the inventions of writing, mathematics, and the computer as examples of such changes.

Ray Kurzweil's analysis of history concludes that technological progress follows a pattern of exponential growth, following what he calls The Law of Accelerating Returns. He generalizes Moore's law, which describes geometric growth in integrated semiconductor complexity, to include technologies from far before the integrated circuit.

Whenever technology approaches a barrier, Kurzweil writes, new technologies will cross it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history".(Kurzweil 2001) Kurzweil believes that the singularity will occur before the end of the 21st century, setting the date at 2045 (Kurzweil 2005). His predictions differ from Vinge’s in that he predicts a gradual ascent to the singularity, rather than Vinge’s rapidly self-improving superhuman intelligence.

On this view, an artificial intelligence capable of improving on its own design would itself face a singularity. Self-augmentation or bootstrapping of intelligence is featured by Dan Simmons in his novel Hyperion, in which a collection of artificial intelligences debate whether or not to make themselves obsolete by creating a new generation of "ultimate" intelligence.

The Acceleration Studies Foundation, an educational non-profit foundation founded by John Smart, engages in outreach, education, research and advocacy concerning accelerating change.(Acceleration Studies Foundation 2007) It produces the Accelerating Change conference at Stanford University, and maintains the educational site Acceleration Watch.

Presumably, a technological singularity would lead to the rapid development of a Kardashev Type I civilization, one that has achieved mastery of the resources of its home planet (a Type II civilization commands the resources of its planetary system, and a Type III those of its galaxy).[51]

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's Wired magazine article "Why the future doesn't need us".(Joy 2000)

Criticism

Steven Pinker stated in 2008:[15]

"(...) There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles — all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. (...)"

Some critics go so far as to assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.[52] Theodore Modis[53] and Jonathan Huebner[54] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining (John Smart, however, criticizes Huebner's analysis[55]). Some evidence for this decline is that the rise in computer clock speeds is slowing, even while Moore's prediction of exponentially increasing circuit density continues to hold; this is because of excessive heat build-up in the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advances in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-core processors.[56]

Others propose that other "singularities" can be found through analysis of trends in world population, world gross domestic product, and other indices. Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[57]

In The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore's law to 19th-century computers. Schmidhuber (2006) suggests differences in memory of recent and distant events create an illusion of accelerating change, and that such phenomena may be responsible for past apocalyptic predictions.

Andrew Kennedy, in his 2006 paper for the British Interplanetary Society discussing change and the growth in space travel velocities,[58] stated that although long-term overall growth is inevitable, it is small, embodying both ups and downs, and noted, "New technologies follow known laws of power use and information spread and are obliged to connect with what already exists. Remarkable theoretical discoveries, if they end up being used at all, play their part in maintaining the growth rate: they do not make its plotted curve... redundant." He stated that exponential growth is no predictor in itself, and illustrated this with examples such as quantum theory. The quantum was conceived in 1900, and quantum theory was in existence and accepted approximately 25 years later. However, it took over 40 years for Richard Feynman and others to produce meaningful numbers from the theory. Bethe understood nuclear fusion in 1935, but 75 years later fusion reactors are still a dream. Similarly, entanglement was understood in 1935 but was not put to practical use until the 21st century. Kennedy concludes that "the probability of a discovery in any one sector contributing, on its own, to a sudden radical departure from the overall growth rate is not likely."

A study of patents per thousand persons shows that human creativity does not show accelerating returns but, as suggested by Joseph Tainter in his seminal The Collapse of Complex Societies,[59] a law of diminishing returns. The number of patents per thousand persons peaked in the period from 1850 to 1900, and has been declining since. The growth of complexity eventually becomes self-limiting, and leads to a widespread "general systems collapse". Thomas Homer-Dixon, in The Upside of Down: Catastrophe, Creativity and the Renewal of Civilization, maintains that declining energy returns on investment have led to the collapse of civilizations. Jared Diamond, in Collapse: How Societies Choose to Fail or Succeed, also shows that cultures self-limit when they exceed the sustainable carrying capacity of their environment, and that the consumption of strategic resources (frequently timber, soils or water) creates a deleterious positive feedback loop that leads eventually to social collapse and technological retrogression.

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil's iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary "events" were picked arbitrarily.[60]
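One way to see the straight-line bias is with synthetic data. In the sketch below the "events" are random, sampled uniformly in log-time to mimic hindsight's finer resolution of recent history; the sampling scheme and all numbers are illustrative assumptions, not taken from the cited critiques:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "paradigm shifts": 40 dates sampled uniformly in
# log10(years before present), between 10 years and 4 billion years ago.
log_t = rng.uniform(np.log10(10), np.log10(4e9), 40)
years_bp = np.sort(10**log_t)[::-1]         # oldest event first

# Interval from each event to the next, more recent one.
gaps = years_bp[:-1] - years_bp[1:]

# On log-log axes, gap vs. time-before-present hugs a line of slope ~1,
# with no underlying acceleration in "innovation" at all.
slope, _ = np.polyfit(np.log10(years_bp[:-1]), np.log10(gaps), 1)
print(f"fitted log-log slope: {slope:.2f}")  # close to 1
```

A straight line on such a chart, in other words, can reflect how the data points were selected as much as any law of accelerating returns.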

The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[61]

Martin Ford, in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, postulates a "technology paradox": before the Singularity could occur, most routine jobs in the economy would already have been automated, since this requires a level of technology inferior to that of the Singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies required to bring on the Singularity.

In popular culture

James P. Hogan's 1979 novel The Two Faces of Tomorrow is an explicit description of what is now called the Singularity. An artificial intelligence system solves an excavation problem on the moon in a brilliant and novel way, but nearly kills a work crew in the process. Realizing that systems are becoming too sophisticated and complex to predict or manage, a scientific team sets out to teach a sophisticated computer network how to think more humanly. The story documents the rise of self-awareness in the computer system, the humans' loss of control and failed attempts to shut down the experiment as the computer desperately defends itself, and the computer intelligence reaching maturity.

While discussing the singularity's growing recognition, Vernor Vinge (1993) writes that "it was the science-fiction writers who felt the first concrete impact." In addition to his own short story "Bookworm, Run!", whose protagonist is a chimpanzee with intelligence augmented by a government experiment, he cites Greg Bear's novel Blood Music (1983) as an example of the singularity in fiction. Vinge described surviving the singularity in his 1986 novel Marooned in Realtime. Vinge later expanded the notion of the singularity to a galactic scale in A Fire Upon the Deep (1992), a novel populated by transcendent beings, each the product of a different race and possessed of distinct agendas and overwhelming power.

In William Gibson's 1984 novel Neuromancer, artificial intelligences capable of improving their own programs are strictly regulated by special "Turing police" to ensure they never exceed a certain level of intelligence, and the plot centers on the efforts of one such AI to circumvent their control. The 1994 novel The Metamorphosis of Prime Intellect features an AI that augments itself so quickly as to gain low-level control of all matter in the universe in a matter of hours.

A more malevolent AI achieves similar levels of omnipotence in Harlan Ellison's short story I Have No Mouth, and I Must Scream (1967).

William Thomas Quick's novels Dreams of Flesh and Sand (1988), Dreams of Gods and Men (1989), and Singularities (1990) present an account of the transition through the singularity. In the last of these, one of the characters states that mankind's survival requires it to integrate with the emerging machine intelligences, or be crushed under the dominance of the machines; this is presented as the greatest risk to the survival of a species reaching this point, with allusions to large numbers of other species that either survived or failed the test, although no actual contact with alien species occurs in the novels.

The singularity is sometimes addressed in fictional works to explain the event's absence. Neal Asher's Gridlinked series features a future where humans living in the Polity are governed by AIs and while some are resentful, most believe that they are far better governors than any human. In the fourth novel, Polity Agent, it is mentioned that the singularity is far overdue yet most AIs have decided not to partake in it for reasons that only they know. A flashback character in Ken MacLeod's 1998 novel The Cassini Division dismissively refers to the singularity as "the Rapture for nerds", though the singularity goes on to happen anyway.

Popular movies in which computers become intelligent and violently overpower the human race include Colossus: The Forbin Project, the Terminator series, the very loose film adaptation of I, Robot, and The Matrix series. The television series Battlestar Galactica also explores these themes.

Isaac Asimov expressed ideas similar to a post-Kurzweilian singularity in his short story The Last Question. Asimov's future envisions a reality where a combination of strong artificial intelligence and post-humans consume the cosmos, during a time Kurzweil describes as when "the universe wakes up", the last of his six stages of cosmic evolution as described in The Singularity is Near. Post-human entities throughout various time periods of the story inquire of the artificial intelligence within the story as to how entropy death will be avoided. The AI responds that it lacks sufficient information to come to a conclusion, until the end of the story when the AI does indeed arrive at a solution. Notably, it does so in order to fulfill its duty to answer the humans' question.

St. Edward's University chemist Eamonn Healy discusses accelerating change in the film Waking Life. He divides history into increasingly shorter periods, estimating "two billion years for life, six million years for the hominid, a hundred-thousand years for mankind as we know it". He proceeds to human cultural evolution, giving time scales of ten thousand years for agriculture, four hundred years for the scientific revolution, and one hundred fifty years for the industrial revolution. Information is emphasized as providing the basis for the new evolutionary paradigm, with artificial intelligence its culmination. He concludes we will eventually create "neohumans" which will usurp humanity’s present role in scientific and technological progress and allow the exponential trend of accelerating change to continue past the limits of human ability.

Accelerating progress features in some science fiction works, and is a central theme in Charles Stross's Accelerando. Other notable authors who address singularity-related issues include Karl Schroeder, Greg Egan, Ken MacLeod, Rudy Rucker, David Brin, Iain M. Banks, Neal Stephenson, Tony Ballantyne, Bruce Sterling, Dan Simmons, Damien Broderick, Fredric Brown, Jacek Dukaj, Stanisław Lem, Nagaru Tanigawa, Douglas Adams and Ian McDonald.

The feature-length documentary film Transcendent Man is based on Ray Kurzweil and his book The Singularity Is Near. The film documents Kurzweil's quest to reveal what he believes to be mankind's destiny.

In 2009, scientists at Aberystwyth University in Wales and the U.K.'s University of Cambridge designed a robot called Adam that they believe to be the first machine to independently discover new scientific findings.[62] Also in 2009, researchers at Cornell developed a computer program that extrapolated the laws of motion from a pendulum's swings.[63][64]

The web comic Dresden Codak deals with transhumanist themes and the singularity.

See also

  • James John Bell
  • Development criticism
  • Doomsday argument
  • Full genome sequencing
  • Hans Moravec
  • Hyperbolic growth
  • List of emerging technologies
  • Logarithmic timeline and detailed logarithmic timeline
  • Max More
  • Molecular engineering
  • Novelty theory
  • Omega Point
  • Predictive medicine
  • Simulated reality
  • Singularitarianism
  • Superintelligence
  • Technological determinism
  • Technological evolution
  • Techno-utopianism
  • Tipping point

Notes

  1. Yudkowsky, Eliezer. The Singularity: Three Major Schools
  2. http://www.kurzweilai.net/max-more-and-ray-kurzweil-on-the-singularity-2
  3. http://singinst.org/overview/whatisthesingularity
  4. Good, I. J. "Speculations Concerning the First Ultraintelligent Machine", Advances in Computers, vol. 6, 1965.
  5. Ehrlich, Paul. The Dominant Animal: Human Evolution and the Environment
  6. Superbrains born of silicon will change everything.
  7. Good, I. J., "Speculations Concerning the First Ultraintelligent Machine", Franz L. Alt and Morris Rubinoff, ed., Advances in Computers (Academic Press) 6: 31–88, 1965.
  8. The Human Importance of the Intelligence Explosion
  9. Good, I. J. 1965 Speculations Concerning the First Ultraintelligent Machine. Pp 31-88 in Advances in Computers, 6, F. L. Alt and M Rubinoff, eds. New York: Academic Press.
  10. Ray Kurzweil, The Singularity is Near, Penguin Group, 2005
  11. Artificial Intelligence as a Positive and Negative Factor in Global Risk, Global Catastrophic Risks, Oxford University Press, 2008, Nick Bostrom and Milan M. Cirkovic
  12. The Uncertain Future; a future technology and world-modeling project
  13. GLOBAL CATASTROPHIC RISKS SURVEY (2008) Technical Report 2008/1 Published by Future of Humanity Institute, Oxford University. Anders Sandberg and Nick Bostrom
  14. Existential Risks; Analyzing Human Extinction Scenarios and Related Hazards, Nick Bostrom
  15. http://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity
  16. http://spectrum.ieee.org/computing/hardware/whos-who-in-the-singularity
  17. Ulam, S., Tribute to John von Neumann, Bulletin of the American Mathematical Society, vol 64, nr 3, part 2, May, 1958, p1-49.
  18. Solomonoff, R.J. "The Time Scale of Artificial Intelligence: Reflections on Social Effects," Human Systems Management, Vol 5, pp. 149-153, 1985, http://world.std.com/~rjs/timesc.pdf.
  19. Vinge, Vernor. "The Coming Technological Singularity: How to Survive in the Post-Human Era"
  20. de Grey, Aubrey. The singularity and the Methuselarity: similarities and differences
  21. Robin Hanson, "Economics Of The Singularity", IEEE Spectrum Special Report: The Singularity, http://www.spectrum.ieee.org/robotics/robotics-software/economics-of-the-singularity, retrieved 2008-09-11
  22. About Singularity University at its official website
  23. http://www.prweb.com/releases/OutsmartYourGenes/DrBrandonColby/prweb3838114.htm
  24. http://www.outsmartyourgenes.com
  25. David Chalmers, John Locke Lecture, 10 May, Exam Schools, Oxford, presenting a philosophical analysis of the possibility of a technological singularity or "intelligence explosion" resulting from recursively self-improving AI.
  26. The Singularity: A Philosophical Analysis, David J. Chalmers
  27. http://www.itrs.net/Links/2007ITRS/ExecSum2007.pdf
  28. http://singinst.org/riskintro/index.html
  29. Eliezer Yudkowsky, 1996, "Staring at the Singularity"
  30. http://yudkowsky.net/singularity/power
  31. Omohundro, Stephen M., "The Basic AI Drives." Artificial General Intelligence, 2008 proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008
  32. http://www.kurzweilai.net/artificial-general-intelligence-now-is-the-time
  33. Omohundro, Stephen M., "The Nature of Self-Improving Artificial Intelligence." Self-Aware Systems. 21 Jan. 2008. Web. 07 Jan. 2010.
  34. Bostrom, Nick, The Future of Human Evolution, Death and Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing, ed. Charles Tandy, p. 339–371, 2004, Ria University Press.
  35. Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk, Global Catastrophic Risks, Oxford University Press, 2008
  36. http://singinst.org/upload/artificial-intelligence-risk.pdf
  37. The Stamp Collecting Device, Nick Hay
  38. Ethical Issues in Advanced Artificial Intelligence, Nick Bostrom, in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12-17
  39. Omohundro, Stephen M., "The Basic AI Drives." Artificial General Intelligence, 2008 proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008.
  40. de Garis, Hugo. "The Coming Artilect War", Forbes.com, 22 June 2009.
  41. Bostrom, Nick, The Future of Human Evolution, Death and Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing, ed. Charles Tandy, p. 339–371, 2004, Ria University Press.
  42. Artificial Intelligence Will Kill Our Grandchildren (Singularity), Dr Anthony Berglas
  43. The Singularity: A Philosophical Analysis David J. Chalmers
  44. Coherent Extrapolated Volition, Eliezer S. Yudkowsky, May 2004
  45. Scientists Worry Machines May Outsmart Man By JOHN MARKOFF, NY Times, July 26, 2009.
  46. Call for debate on killer robots, By Jason Palmer, Science and technology reporter, BBC News, 8/3/09.
  47. Mick, Jason. New Navy-funded Report Warns of War Robots Going "Terminator", Blog, dailytech.com, February 17, 2009.
  48. Flatley, Joseph L. Navy report warns of robot uprising, suggests a strong moral compass, engadget.com, 18 February 2009.
  49. AAAI Presidential Panel on Long-Term AI Futures 2008-2009 Study, Association for the Advancement of Artificial Intelligence, Accessed 7/26/09.
  50. Article at Asimovlaws.com, July 2004, accessed 7/27/2009.
  51. Zubrin, Robert. 1999, Entering Space - Creating a Spacefaring Civilization
  52. Dreyfus & Dreyfus 2000, p. xiv:

    "(...) The truth is that human intelligence can never be replaced with machine intelligence simply because we are not ourselves "thinking machines" in the sense in which that term is commonly understood.Hawking (1998) (...)"

    Some people say that computers can never show true intelligence whatever that may be. But it seems to me that if very complicated chemical molecules can operate in humans to make them intelligent then equally complicated electronic circuits can also make computers act in an intelligent way. And if they are intelligent they can presumably design computers that have even greater complexity and intelligence.

  53. Theodore Modis, Forecasting the Growth of Complexity and Change, Technological Forecasting & Social Change, 69, No 4, 2002
  54. Huebner, Jonathan (2005) A Possible Declining Trend for Worldwide Innovation, Technological Forecasting & Social Change, October 2005, pp. 980-6
  55. Smart, John (September 2005), On Huebner Innovation, Acceleration Studies Foundation, http://accelerating.org/articles/huebnerinnovation.html, retrieved on 2007-08-07
  56. Krazit, Tom. Intel pledges 80 cores in five years, CNET News, 26 September 2006.
  57. See, e.g., Korotayev A., Malkov A., Khaltourina D. Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth. Moscow: URSS Publishers, 2006; Korotayev A. V. A Compact Macromodel of World System Evolution // Journal of World-Systems Research 11/1 (2005): 79–93.
  58. Kennedy, Andrew, "Interstellar Travel: The Wait Calculation and the Incentive Trap of Progress", JBIS, Vol 59, No. 7, July 2006
  59. Tainter, Joseph (1988) "The Collapse of Complex Societies" (Cambridge University Press)
  60. Myers, PZ, Singularly Silly Singularity, http://scienceblogs.com/pharyngula/2009/02/singularly_silly_singularity.php, retrieved 2009-04-13 
  61. Anonymous (18 March 2006), "More blades good", The Economist (London) 378 (8469): 85, http://www.economist.com/science/displaystory.cfm?story_id=5624861 
  62. "Robo-scientist makes gene discovery on its own", Crave, CNET.
  63. "Computer Program Self-Discovers Laws of Physics", Wired Science, Wired.com.
  64. "Computer derives natural laws", Cornell Chronicle.
